    Assessing Learning-Centered Leadership: Connections to Research, Professional Standards, and Current Practices

    Describes an assessment model designed to evaluate school leaders' performance. Unlike existing tools, the new system assesses both individuals and teams, and it focuses specifically on instructional leadership and on behaviors that improve learning

    Observation of Fermi-energy dependent unitary impurity resonances in a strong topological insulator Bi_2Se_3 with scanning tunneling spectroscopy

    Scanning tunneling spectroscopic studies of Bi_2Se_3 epitaxial films on Si (111) substrates reveal highly localized unitary impurity resonances associated with non-magnetic quantum impurities. The strength of the resonances depends on the energy difference between the Fermi level (E_F) and the Dirac point (E_D) and diverges as E_F approaches E_D. The Dirac-cone surface state of the host recovers within a spatial distance of ~2 Å from the impurities, suggesting robust topological protection of the surface state of topological insulators against high-density impurities that preserve time-reversal symmetry

    A regularized smoothing Newton method for symmetric cone complementarity problems

    This paper extends the regularized smoothing Newton method for vector complementarity problems to symmetric cone complementarity problems (SCCP), which includes the nonlinear complementarity problem, the second-order cone complementarity problem, and the semidefinite complementarity problem as special cases. In particular, we study the strong semismoothness and Jacobian nonsingularity of the total natural residual function for SCCP. We also derive the uniform approximation property and the Jacobian consistency of the Chen–Mangasarian smoothing function of the natural residual. Based on these properties, global and quadratic convergence of the proposed algorithm is established
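
    As a rough illustration of the kind of iteration such smoothing Newton methods use (not the paper's algorithm, which works on general symmetric cones and adds a regularization term), the sketch below applies a smoothing Newton step to a plain nonlinear complementarity problem, with a CHKS-type square-root smoothing standing in for the smoothed natural residual; the function names and the fixed smoothing-parameter update are illustrative assumptions only.

        import numpy as np

        def smoothed_min(a, b, mu):
            # Smooth approximation of min(a, b) = (a + b - |a - b|) / 2,
            # with |t| replaced by sqrt(t^2 + 4*mu^2) (CHKS-type smoothing).
            return 0.5 * (a + b - np.sqrt((a - b) ** 2 + 4.0 * mu ** 2))

        def smoothing_newton_ncp(F, jac_F, x0, mu0=1.0, tol=1e-10, max_iter=50):
            # Solve the NCP  x >= 0, F(x) >= 0, x'F(x) = 0  by driving the
            # smoothed natural residual to zero while shrinking mu.
            x, mu = x0.astype(float), mu0
            for _ in range(max_iter):
                Fx, Jx = F(x), jac_F(x)
                H = smoothed_min(x, Fx, mu)
                if np.linalg.norm(H) < tol and mu < tol:
                    break
                # Jacobian of the smoothed residual with respect to x (chain rule).
                d = (x - Fx) / np.sqrt((x - Fx) ** 2 + 4.0 * mu ** 2)
                J_H = np.diag(0.5 * (1.0 - d)) + (0.5 * (1.0 + d))[:, None] * Jx
                x = x + np.linalg.solve(J_H, -H)   # full Newton step, no line search
                mu *= 0.1                          # crude smoothing-parameter update
            return x

        # Example: linear complementarity problem F(x) = M x + q, solution (1/3, 1/3).
        M = np.array([[2.0, 1.0], [1.0, 2.0]])
        q = np.array([-1.0, -1.0])
        x_star = smoothing_newton_ncp(lambda x: M @ x + q, lambda x: M, np.ones(2))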

    Computing one-bit compressive sensing via double-sparsity constrained optimization

    One-bit compressive sensing is popular in signal processing and communications because of its low storage cost and low hardware complexity. However, recovering the signal is challenging because only one-bit (sign) information about the measurements is available. In this paper, we formulate one-bit compressive sensing as a double-sparsity constrained optimization problem. First-order optimality conditions for this nonconvex and discontinuous problem are established via the newly introduced τ-stationarity, based on which a gradient projection subspace pursuit (GPSP) approach with global convergence and a fast convergence rate is proposed. Numerical experiments against other leading solvers illustrate the high efficiency of the proposed algorithm in terms of both computation time and quality of signal recovery
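
    The sketch below is not the paper's GPSP method; it illustrates the generic gradient projection idea for one-bit compressive sensing (close in spirit to binary iterative hard thresholding), assuming a one-sided quadratic sign-consistency loss. The initialization, step size, and function names are assumptions for the example.

        import numpy as np

        def hard_threshold(x, s):
            # Keep the s largest-magnitude entries of x, zero out the rest.
            z = np.zeros_like(x)
            idx = np.argsort(np.abs(x))[-s:]
            z[idx] = x[idx]
            return z

        def one_bit_cs_gradient_projection(A, y, s, step=0.1, max_iter=200):
            # Recover a unit-norm s-sparse x from one-bit measurements y = sign(A @ x)
            # by gradient steps on 0.5 * ||min(y * (A @ x), 0)||^2 (penalizing sign
            # violations only), followed by projection onto the sparsity constraint.
            x = hard_threshold(A.T @ y, s)          # simple correlation initialization
            x /= np.linalg.norm(x)
            for _ in range(max_iter):
                r = np.minimum(y * (A @ x), 0.0)    # nonzero only where signs disagree
                grad = A.T @ (y * r)
                x = hard_threshold(x - step * grad, s)
                nrm = np.linalg.norm(x)
                if nrm > 0:
                    x /= nrm                        # one-bit data carry no scale; normalize
            return x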

    Global and quadratic convergence of Newton hard-thresholding pursuit

    Algorithms based on the hard-thresholding principle have been well studied, with sound theoretical guarantees, in compressed sensing and, more generally, in sparsity-constrained optimization. It has been widely observed in empirical studies that when a restricted Newton step is used as the debiasing step, hard-thresholding algorithms tend to meet their halting conditions in significantly fewer iterations and are very efficient. Hence, the resulting Newton hard-thresholding algorithms call for stronger theoretical guarantees than their simple hard-thresholding counterparts. This paper provides a theoretical justification for the use of the restricted Newton step. We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for sparsity-constrained optimization. Our main result shows that NHTP is quadratically convergent under the standard assumption of restricted strong convexity and smoothness. We also establish its global convergence to a stationary point under a weaker assumption. In the special case of compressive sensing, NHTP effectively reduces to some of the existing hard-thresholding algorithms with a Newton step. Consequently, our fast convergence result justifies why those algorithms perform better than their counterparts without the Newton step. The efficiency of NHTP was demonstrated on both synthetic and real data in compressed sensing and sparse logistic regression
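
    To make the two-step structure concrete, here is a minimal sketch of a Newton hard-thresholding iteration for min f(x) subject to ||x||_0 <= s: a gradient step plus hard thresholding picks the working support, and a Newton (debiasing) step is then taken for f restricted to that support. This is an illustrative simplification, not the paper's exact NHTP rule: the line search and support safeguards are omitted, and the helper names are assumptions.

        import numpy as np

        def newton_hard_thresholding(grad, hess, n, s, step=1.0, tol=1e-10, max_iter=100):
            x = np.zeros(n)
            for _ in range(max_iter):
                # (1) gradient step + hard thresholding selects the working support T
                u = x - step * grad(x)
                T = np.argsort(np.abs(u))[-s:]
                xT = np.zeros(n)
                xT[T] = u[T]                              # point supported on T
                # (2) restricted Newton (debiasing) step on the support T
                d = np.linalg.solve(hess(xT)[np.ix_(T, T)], -grad(xT)[T])
                x_new = np.zeros(n)
                x_new[T] = xT[T] + d
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        # Compressed-sensing instance: f(x) = 0.5 * ||A @ x - b||^2, so the restricted
        # Newton step is a least-squares debiasing on the selected support.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 200))
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
        b = A @ x_true
        x_hat = newton_hard_thresholding(lambda x: A.T @ (A @ x - b),
                                         lambda x: A.T @ A, n=200, s=5)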

    Quadratic convergence of Smoothing Newton's method for 0/1 loss optimization

    It has been widely recognized that the 0/1 loss function is one of the most natural choices for modeling classification errors, and it has a wide range of applications including support vector machines and 1-bit compressed sensing. Due to the combinatorial nature of the 0/1 loss function, methods based on convex relaxations or smoothing approximations have dominated the existing research and are often able to provide approximate solutions of good quality. However, those methods do not optimize the 0/1 loss function directly, and hence no optimality has been established for the original problem. This paper studies the optimality conditions of 0/1 loss minimization and, for the first time, develops a Newton's method that directly optimizes the 0/1 function with local quadratic convergence under reasonable conditions. Extensive numerical experiments demonstrate its superior performance, as one would expect from Newton-type methods
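
    For reference, one common way to write a 0/1-loss classification model of the kind the abstract refers to is a support vector machine with the hinge loss replaced by the 0/1 loss; the regularized form below is an illustrative assumption rather than the paper's exact model:

        \min_{w \in \mathbb{R}^n} \ \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{m} \ell_{0/1}\bigl(y_i\, w^{\top} x_i\bigr),
        \qquad
        \ell_{0/1}(t) = \begin{cases} 1, & t \le 0, \\ 0, & t > 0, \end{cases}

    where (x_i, y_i) are training samples with labels y_i in {-1, +1} and C > 0 trades off the margin against the number of misclassified points.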

    Sparse recovery on Euclidean Jordan algebras

    This paper is concerned with the problem of sparse recovery on Euclidean Jordan algebras (SREJA), which includes the sparse signal recovery problem and the low-rank symmetric matrix recovery problem as special cases. We introduce the notions of the restricted isometry property (RIP), the null space property (NSP), and s-goodness for linear transformations in SREJA, all of which provide sufficient conditions for s-sparse recovery via nuclear norm minimization on Euclidean Jordan algebras. Moreover, we show that both s-goodness and the NSP are necessary and sufficient conditions for exact s-sparse recovery via nuclear norm minimization on Euclidean Jordan algebras. Applying these characteristic properties, we establish exact and stable recovery results for solving SREJA problems via nuclear norm minimization
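
    For orientation, the nuclear norm minimization referred to above can be written generically as below; the notation is an assumption for illustration, with \mathcal{A} a linear transformation on the Jordan algebra \mathcal{J}, b the observation, and \lambda_i(x) the spectral eigenvalues of x, so that the nuclear norm specializes to the l1 norm of a vector and to the usual nuclear norm of a symmetric matrix:

        \min_{x \in \mathcal{J}} \ \|x\|_{*} \quad \text{s.t.} \quad \mathcal{A}(x) = b,
        \qquad
        \|x\|_{*} = \sum_{i} |\lambda_i(x)|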

    Robust two-stage stochastic linear optimization with risk aversion

    We study a two-stage stochastic linear optimization problem in which the recourse function is risk-averse rather than risk-neutral. In particular, we consider the mean-conditional value-at-risk objective function in the second stage. The model is robust in the sense that the distribution of the underlying random variable is assumed to belong to a certain family of distributions rather than to be exactly known. We start by analyzing a simple case in which uncertainty arises only in the objective function, and then explore the general case in which uncertainty also arises in the constraints. We show that the former problem is equivalent to a semidefinite program and that the latter problem is generally NP-hard. Applications to two-stage portfolio optimization, material ordering problems, the stochastic production-transportation problem, and the single-facility minimax distance problem are considered. Numerical results show that the proposed robust risk-averse two-stage stochastic programming model can effectively control risk while producing solutions of acceptably good quality
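
    As a reference for the second-stage criterion, the mean-CVaR objective mentioned above is typically written with the Rockafellar-Uryasev representation of CVaR; the weight \lambda combining mean and CVaR is an assumption about the form of the trade-off, not necessarily the paper's exact parameterization:

        \min_{x}\ c^{\top}x + (1-\lambda)\,\mathbb{E}\bigl[Q(x,\xi)\bigr] + \lambda\,\mathrm{CVaR}_{\alpha}\bigl(Q(x,\xi)\bigr),
        \qquad
        \mathrm{CVaR}_{\alpha}(Z) = \min_{\eta \in \mathbb{R}} \Bigl\{ \eta + \tfrac{1}{1-\alpha}\,\mathbb{E}\bigl[(Z-\eta)_{+}\bigr] \Bigr\},

    where Q(x, \xi) is the optimal value of the second-stage linear program given the first-stage decision x and the random outcome \xi, and the robust model takes the worst case of this objective over the assumed family of distributions.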